Google Offers Bug Bounties for Generative AI Security Vulnerabilities
Google’s Vulnerability Reward Program offers up to $31,337 for discovering potential hazards. Google joins OpenAI and Microsoft in rewarding AI bug hunts.
Google expanded its Vulnerability Rewards Program to include bugs and vulnerabilities found in generative AI. Specifically, Google is looking for bug hunters for its own generative AI products, such as Google Bard, which is available in many countries, and Google Cloud’s Contact Center AI, Agent Assist.
“We believe this will incentivize research around AI safety and security, and bring potential issues to light that will ultimately make AI safer for everyone,” Google’s Vice President of Trust and Safety Laurie Richardson and Vice President of Privacy, Safety and Security Engineering Royal Hansen wrote in an Oct. 26 blog post. “We’re also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable.”
Google’s bug bounty program: Limitations and rewards
There are limitations on what counts as a vulnerability in generative AI; the complete list of vulnerabilities Google considers in scope or out of scope for the Vulnerability Rewards Program can be found in this Google security blog post.
Generative AI introduces risks that traditional computing does not; these risks include unfair bias, model manipulation and misinterpretation of data, Richardson and Hansen wrote. Notably, AI “hallucinations” (misinformation generated within a private browsing session) do not count as vulnerabilities for the purposes of the Vulnerability Rewards Program. Attacks that expose sensitive information, change the state of a Google user’s account without their consent or provide backdoors into a generative AI model are in scope.
Ultimately, anyone participating in the bug bounty needs to prove that the vulnerability they discover could “pose a compelling attack scenario or feasible path to Google or user harm,” according to the Google security blog.
Possible Google AI bug bounty rewards
Rewards for the Vulnerability Rewards Program range from $100 to $31,337, depending on the type of vulnerability. Details on rewards and payouts can be found on Google’s Bug Hunters site.
Other bug bounties and common attack types in generative AI
OpenAI, Microsoft and other organizations offer bug bounties for white hat hackers who find vulnerabilities in generative AI systems. Microsoft offers between $2,000 and $15,000 for qualifying bugs, while OpenAI’s bug bounty program pays between $200 and $20,000.
SEE: IBM X-Force researchers found that phishing emails written by people are slightly more likely to get clicks than those written by ChatGPT. (TechRepublic)
In an Oct. 26 report, HackerOne and OWASP found that the most common vulnerability in generative AI was prompt injection (i.e., using prompts to make the AI model do something it was not intended to do), followed by insecure output handling (i.e., when LLM output is accepted without scrutiny) and the manipulation of training data.
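Insecure output handling, the second category above, can be made concrete with a minimal Python sketch. The function names and the toy HTML template below are illustrative assumptions, not part of any real LLM API; the point is simply that model output, like any user input, must be treated as untrusted data before it reaches a browser:

```python
import html

def render_llm_output_insecure(llm_output: str) -> str:
    # Insecure output handling: model output is embedded in HTML verbatim,
    # so a response containing "<script>..." would execute in the browser.
    return f"<div class='answer'>{llm_output}</div>"

def render_llm_output_safe(llm_output: str) -> str:
    # Mitigation: escape the model's output before rendering, treating it
    # as untrusted text rather than trusted markup.
    return f"<div class='answer'>{html.escape(llm_output)}</div>"

# A malicious or prompt-injected model response:
payload = "<script>stealCookies()</script>"

print(render_llm_output_insecure(payload))  # raw <script> tag survives
print(render_llm_output_safe(payload))      # tag is escaped to &lt;script&gt;
```

Prompt injection follows the same trust-boundary logic in the other direction: untrusted user text mixed into a prompt can override the developer’s instructions, which is why both categories top the HackerOne and OWASP list.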
How to learn to use generative AI
Developers and security researchers new to generative AI have plenty of options for learning how to use it, from experimenting with free applications such as ChatGPT to taking professional courses. DeepLearning.AI offers courses at both beginner and advanced levels for professionals who want to learn how to use and develop for artificial intelligence and machine learning.